An Explanation-oriented Inquiry Dialogue Game for Expert Collaborative Recommendations

Shaheen, Qurat-ul-ain, Budzynska, Katarzyna, Sierra, Carles

arXiv.org Artificial Intelligence

This work presents a requirements analysis for collaborative dialogues among medical experts and, based on this analysis, an inquiry dialogue game for incorporating explainability into multiagent system design. The game allows experts with different knowledge bases to collaboratively make recommendations while generating rich traces of the reasoning process, by combining explanation-based illocutionary forces in an inquiry dialogue. The dialogue game was implemented as a prototype web application and evaluated against the specification through a formative user study. The user study confirms that the dialogue game meets the collaboration needs of medical experts. It also provides insights into the real-life value of dialogue-based communication tools for the medical community.


Judicial Permission

Governatori, Guido, Rotolo, Antonino

arXiv.org Artificial Intelligence

This paper examines the significance of weak permissions in criminal trials ("judicial permission"). It introduces a dialogue game model to systematically address judicial permissions, considering different standards of proof and argumentation semantics.


An Interleaving Semantics of the Timed Concurrent Language for Argumentation to Model Debates and Dialogue Games

Bistarelli, Stefano, Meo, Maria Chiara, Taticchi, Carlo

arXiv.org Artificial Intelligence

Time is a crucial factor in modelling dynamic behaviours of intelligent agents: activities have a determined temporal duration in a real-world environment, and previous actions influence agents' behaviour. In this paper, we propose a language for modelling concurrent interaction between agents that also allows the specification of temporal intervals in which particular actions occur. Such a language exploits a timed version of Abstract Argumentation Frameworks to realise a shared memory used by the agents to communicate and reason about the acceptability of their beliefs with respect to a given time interval. An interleaving model on a single processor is used for basic computation steps, with maximum parallelism for time elapsing. Following this approach, only one of the enabled agents is executed at each moment. To demonstrate the capabilities of the language, we also show how it can be used to model interactions such as debates and dialogue games taking place between intelligent agents. Lastly, we present an implementation of the language that can be accessed via a web interface. Under consideration in Theory and Practice of Logic Programming (TPLP).
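The core idea of a timed Abstract Argumentation Framework can be illustrated with a minimal sketch: arguments are available only during given time intervals, and at a query time we compute acceptability (here, the grounded extension) over the arguments and attacks that are currently "live". The argument names, intervals, and attack relation below are invented for illustration and are not from the paper.

```python
def grounded_extension(args, attacks):
    """Iterate the characteristic function from the empty set
    to its least fixpoint (the grounded extension)."""
    extension = set()
    while True:
        # an argument is acceptable w.r.t. the extension if every
        # one of its attackers is itself attacked by the extension
        acceptable = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for (b, target) in attacks if target == a)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# timed AF: each argument carries an availability interval (start, end)
availability = {"a": (0, 10), "b": (5, 8), "c": (0, 3)}
attacks = {("b", "a"), ("c", "b")}

def snapshot(t):
    """Acceptability at time t, over the currently live sub-framework."""
    live = {x for x, (s, e) in availability.items() if s <= t <= e}
    live_attacks = {(x, y) for (x, y) in attacks if x in live and y in live}
    return grounded_extension(live, live_attacks)
```

At time 1 argument "b" is not yet available, so "a" is unattacked and accepted; at time 6 "b" is live (and "c" is not), so "b" defeats "a".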


Dialogue Games for Benchmarking Language Understanding: Motivation, Taxonomy, Strategy

Schlangen, David

arXiv.org Artificial Intelligence

How does one measure "ability to understand language"? If it is a person's ability that is being measured, this is a question that almost never poses itself in an unqualified manner: Whatever formal test is applied, it takes place against the background of the person's language use in daily social practice, and what is measured is a specialised variety of language understanding (e.g., of a second language; or of written, technical language). Computer programs do not have this background. What does that mean for the applicability of formal tests of language understanding? I argue that such tests need to be complemented with tests of language use embedded in a practice, to arrive at a more comprehensive evaluation of "artificial language understanding". To do such tests systematically, I propose to use "Dialogue Games" -- constructed activities that provide a situational embedding for language use. I describe a taxonomy of Dialogue Game types, linked to a model of the underlying capabilities that are tested, thereby giving an argument for the "construct validity" of the test. I close by showing how the internal structure of the taxonomy suggests an ordering from more specialised to more general situational language understanding, which potentially can provide some strategic guidance for development in this field.


Recommendation as a Communication Game: Self-Supervised Bot-Play for Goal-oriented Dialogue

Kang, Dongyeop, Balakrishnan, Anusha, Shah, Pararth, Crook, Paul, Boureau, Y-Lan, Weston, Jason

arXiv.org Artificial Intelligence

Traditional recommendation systems produce static rather than interactive recommendations, invariant to a user's specific requests, clarifications, or current mood, and can suffer from the cold-start problem when a user's tastes are unknown. These issues can be alleviated by treating recommendation as an interactive dialogue task instead, where an expert recommender can sequentially ask about someone's preferences, react to their requests, and recommend more appropriate items. In this work, we collect a goal-driven recommendation dialogue dataset (GoRecDial), which consists of 9,125 dialogue games and 81,260 conversation turns between pairs of human workers recommending movies to each other. The task is specifically designed as a cooperative game between two players working towards a quantifiable common goal. We leverage the dataset to develop an end-to-end dialogue system that can simultaneously converse and recommend. Models are first trained to imitate the behavior of human players without considering the task goal itself (supervised training). We then finetune our models on simulated bot-bot conversations between two paired pre-trained models (bot-play), in order to achieve the dialogue goal. Our experiments show that models finetuned with bot-play learn improved dialogue strategies, reach the dialogue goal more often when paired with a human, and are rated as more consistent by humans compared to models trained without bot-play. The dataset and code are publicly available through the ParlAI framework.


A Grounded Interaction Protocol for Explainable Artificial Intelligence

Madumal, Prashan, Miller, Tim, Sonenberg, Liz, Vetere, Frank

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate their internal decisions, behaviours, and actions to the interacting humans. Successful explanation involves both cognitive and social processes. In this paper we focus on the challenge of meaningful interaction between an explainer and an explainee, and investigate the structural aspects of an interactive explanation to propose an interaction protocol. We follow a bottom-up approach to derive the model, analysing 398 explanation dialogues drawn from transcripts of different explanation dialogue types. We use grounded theory to code and identify key components of an explanation dialogue. We formalize the model using the agent dialogue framework (ADF) as a new dialogue type and then evaluate it in a human-agent interaction study with 101 dialogues from 14 participants. Our results show that the proposed model can closely follow the explanation dialogues of human-agent conversations.
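An interaction protocol of this kind can be pictured as a finite-state machine over dialogue moves. The sketch below is purely illustrative: the state and move names are assumptions for exposition, not the paper's actual ADF formalisation.

```python
# Hypothetical explanation-dialogue protocol: each state maps the
# legal moves to the state they lead to.
PROTOCOL = {
    "start": {"request_explanation": "explaining"},
    "explaining": {"explain": "await_feedback"},
    "await_feedback": {
        "acknowledge": "closed",     # explainee is satisfied
        "ask_followup": "explaining",  # explainee wants more detail
        "challenge": "explaining",   # explainee disputes the account
    },
    "closed": {},
}

def run(moves, state="start"):
    """Replay a sequence of moves, rejecting any illegal one."""
    for move in moves:
        if move not in PROTOCOL[state]:
            raise ValueError(f"illegal move {move!r} in state {state!r}")
        state = PROTOCOL[state][move]
    return state
```

A dialogue is well-formed exactly when replaying its moves raises no error; reaching "closed" marks a successfully completed explanation.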


The computers being trained to beat you in an argument

BBC News

Humans are used to being outdone by computers when it comes to recalling facts, but they still have the upper hand in an argument. It has long been the case that machines can beat us in games of strategy like chess. And we have come to accept that artificial intelligence is best at analysing huge amounts of data - sifting through the supermarket receipts of millions of shoppers to work out who might be tempted by some vouchers for washing powder. But what if AI were able to handle the most human of tasks - navigating the minefield of subtle nuance, rhetoric and even emotions to take us on in an argument? It is a possibility that could help humans make better decisions and one which growing numbers of researchers are working on.


Electronic Law Journals - JILT 1998 (3) - Bench-Capon et al

AITopics Original Links

The effective use of argument is, of course, central to the practice of Law, and it is important that students of Law learn this skill. We describe here the architecture of a computer-based system to enable students to practice argumentation in a regulated environment. The system makes use of the concept of a dialogue game as a means of providing the necessary rule-governed structure for the conduct of an argument between two students, or a student and a teacher. The architecture described is generic in that it can be instantiated with different forms of dialogue game. This instantiation is achieved by the use of performatives to specify the rules of the game and the semantics of operations within the Dialogue Abstract Machine that is used to implement it.
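The commitment-store bookkeeping behind such rule-governed argument games can be sketched in a few lines. The performative names below (claim, concede, retract, why) are typical of this family of dialogue games but are an illustrative assumption, not the system's actual performative set.

```python
from collections import defaultdict

# each player accumulates a store of propositions they are committed to
commitments = defaultdict(set)

def move(player, performative, prop):
    """Apply one dialogue move, updating the player's commitment store."""
    if performative in ("claim", "concede"):
        commitments[player].add(prop)
    elif performative == "retract":
        commitments[player].discard(prop)
    elif performative == "why":
        # a rule of the game: you may not challenge a proposition
        # you are yourself committed to
        if prop in commitments[player]:
            raise ValueError("cannot challenge own commitment")
    else:
        raise ValueError(f"unknown performative {performative!r}")
```

Specifying the game then amounts to choosing the performatives and the legality conditions attached to each, which is exactly the instantiation point the generic architecture exposes.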


Dialogues for proof search

Alama, Jesse

arXiv.org Artificial Intelligence

Dialogue games provide a two-player semantics for a variety of logics, including intuitionistic and classical logic. Dialogues can be viewed as a kind of analytic calculus not unlike tableaux. Can dialogue games be an effective foundation for proof search in intuitionistic logic (both first-order and propositional)? We announce Kuno, an automated theorem prover for intuitionistic first-order logic based on dialogue games.
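The move repertoire of such Lorenzen-style dialogue games is fixed by the particle rules, which say how each connective may be attacked and defended; the intuitionistic/classical split comes from the structural rules, omitted here. A minimal sketch (encoding and move labels are my own, not Kuno's):

```python
# Formulas as tuples: ("and", A, B), ("or", A, B), ("imp", A, B),
# ("neg", A); atoms are plain strings.

def attacks(formula):
    """Particle rules: each possible attack on an asserted formula,
    paired with the defences it admits (empty list = no defence)."""
    op = formula[0] if isinstance(formula, tuple) else None
    if op == "and":
        # attacker picks a conjunct; defender must state that conjunct
        return [("?L", [formula[1]]), ("?R", [formula[2]])]
    if op == "or":
        # attacker asks; defender chooses which disjunct to state
        return [("?", [formula[1], formula[2]])]
    if op == "imp":
        # attacker asserts the antecedent; defender may state the consequent
        return [(formula[1], [formula[2]])]
    if op == "neg":
        # attacker asserts the negatum; no defence is available
        return [(formula[1], [])]
    return []  # atoms cannot be attacked
```

Proof search over these rules then explores the tree of legal moves, looking for a winning strategy for the proponent.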